Results 1 - 20 of 77

1.
Interactive Learning Environments ; : No Pagination Specified, 2023.
Article in English | APA PsycInfo | ID: covidwho-20245175

ABSTRACT

Mobile application developers rely largely on user reviews to identify issues in mobile applications and meet users' expectations. User reviews are unstructured, unorganized, and highly informal, and their sheer volume makes it difficult to identify and classify issues by extracting the required information. To automate the classification of reviews, many researchers have adopted machine learning approaches. In view of the rising demand for educational applications, especially during COVID-19, this research aims to automate the classification and sentiment analysis of Android educational application reviews using natural language processing and machine learning techniques. A baseline corpus comprising 13,000 records was built by collecting reviews of more than 20 educational applications. The reviews were then manually labelled with respect to sentiment and the issue types mentioned in each review. User reviews are classified into eight categories, and various machine learning algorithms are applied to classify users' sentiments and application issues. The results demonstrate that our proposed framework achieved an accuracy of 97% for sentiment identification and 94% for classifying the most significant issues. Moreover, the interpretability of the model is verified using the explainable artificial intelligence technique of local interpretable model-agnostic explanations. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
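The paper's corpus and exact pipeline are not public; as a rough illustration of the review-classification step it describes, here is a minimal from-scratch multinomial Naive Bayes sentiment classifier. The reviews, labels, and whitespace tokeniser are invented stand-ins, not the study's data.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(reviews):
    """reviews: list of (text, label) pairs."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()
    for text, label in reviews:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, label_counts, vocab

def predict(model, text):
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label in label_counts:
        # log prior + Laplace-smoothed log likelihoods
        lp = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

corpus = [
    ("great app for learning", "positive"),
    ("love the lessons", "positive"),
    ("app crashes constantly", "negative"),
    ("crashes and freezes", "negative"),
]
model = train(corpus)
print(predict(model, "the app crashes"))  # -> negative
```

The same structure extends from two sentiment labels to the paper's eight issue categories by adding more labels to the training pairs.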

2.
Applied Sciences-Basel ; 13(10), 2023.
Article in English | Web of Science | ID: covidwho-20243645

ABSTRACT

A mortality prediction model can be a great tool to assist physicians with decision making in the intensive care unit (ICU), ensuring optimal allocation of ICU resources according to each patient's health condition. The entire world witnessed a severe ICU capacity crisis a few years ago during the COVID-19 pandemic. Widely utilized machine learning (ML) models in this research field can perform poorly due to a lack of proper feature selection. Although nature-inspired algorithms perform well for feature selection in other sectors, no comparative study of their performance has been conducted in the ICU mortality prediction field. Therefore, in this research, the performance of ML models with and without feature selection was compared. In addition, explainable artificial intelligence (AI) was used to examine the contribution of features to the decision-making process. Explainable AI focuses on establishing transparency and traceability for statistical black-box machine learning techniques, and it is essential in the medical industry for fostering public confidence and trust in machine learning model predictions. Three nature-based algorithms were used in this study: the flower pollination algorithm (FPA), particle swarm optimization (PSO), and the genetic algorithm (GA). For classification, the most widely used and diverse classifiers from the literature were applied: logistic regression (LR), the decision tree (DT) classifier, the gradient boosting (GB) algorithm, and the random forest (RF) algorithm. The Medical Information Mart for Intensive Care III (MIMIC-III) dataset was used to collect data on heart failure patients, and feature selection was found to significantly improve the performance of the described ML models.
Without any feature selection on the MIMIC-III heart failure patient dataset, the accuracy of the four ML models, LR, DT, RF, and GB, was 69.9%, 82.5%, 90.6%, and 91.0%, respectively, whereas with feature selection using the FPA, the accuracy increased to 71.6%, 84.8%, 92.8%, and 91.1%, respectively, on the same dataset. The FPA also showed the highest area under the receiver operating characteristic curve (AUROC) value, 83.0%, with the RF algorithm among all algorithms utilized in this study. Thus, it can be concluded that feature selection with the FPA has a profound impact on the outcome of ML models. Shapley additive explanations (SHAP) was used to interpret the ML models because it offers mathematical assurances for the precision and consistency of explanations and is trustworthy and suitable for both local and global explanations. The features that SHAP ranked as most important largely overlapped with those selected by the FPA. We therefore hope that this study will help physicians predict ICU mortality for heart failure patients with a limited number of features and with high accuracy.
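As a sketch of the wrapper-style, nature-inspired feature selection this study compares, here is a toy genetic algorithm (GA) over feature-subset bitmasks. The fitness function is an invented stand-in that rewards a few "informative" features and penalises subset size; the paper instead scores subsets by actual model accuracy on MIMIC-III.

```python
import random

random.seed(0)
N_FEATURES = 10
INFORMATIVE = {0, 3, 7}  # assumed ground truth for the toy fitness

def fitness(mask):
    hits = sum(1 for i in INFORMATIVE if mask[i])
    size_penalty = 0.05 * sum(mask)     # prefer smaller subsets
    return hits - size_penalty

def mutate(mask, rate=0.1):
    return [b ^ (random.random() < rate) for b in mask]

def crossover(a, b):
    cut = random.randrange(1, len(a))   # single-point crossover
    return a[:cut] + b[cut:]

def ga(pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]    # keep the best half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print([i for i, b in enumerate(best) if b])  # indices of selected features
```

FPA and PSO plug into the same wrapper loop; only the way candidate subsets are generated differs.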

3.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) ; 13741 LNCS:154-159, 2023.
Article in English | Scopus | ID: covidwho-20243449

ABSTRACT

Due to the recent COVID-19 pandemic, people tend to wear masks indoors and outdoors, and face recognition systems such as FaceID have consequently shown a decline in accuracy. Many studies have therefore sought to improve recognition accuracy for masked faces. Most of them aimed to enhance the dataset and retrain the models to reach reasonable accuracies, but little research has explained the reasons behind the improvement. We therefore focused on finding an explainable reason for the improvement in model accuracy. First, we observed that accuracy actually increased by 12.86% after training with a masked dataset. We then applied Explainable AI (XAI) to check whether the model truly focused on the regions of interest. The generated heatmaps show that differences in the training data lead to differences in the model's range of focus. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

4.
Artificial Intelligence in Covid-19 ; : 257-277, 2022.
Article in English | Scopus | ID: covidwho-20234592

ABSTRACT

During the COVID-19 pandemic it became evident that outcome prediction of patients is crucial for triaging when resources are limited, enabling an early start or escalation of available therapeutic support. COVID-19 demographic risk factors for severe disease and death, including age and sex, were rapidly established. Common Clinical Decision Support Systems (CDSS) and Early Warning Systems (EWS) have been used to triage based on demographics, vital signs, and laboratory results. However, all of these have limitations, such as dependency on laboratory investigations or fixed threshold values, and were derived from more or less specific cohort studies. Instead, individual illness dynamics and patterns of recovery might be essential characteristics in understanding the critical course of illness. The pandemic has been a game changer for data, and the concept of real-time massive health data has emerged as one of the important tools in battling the pandemic. We here describe the advantages and limitations of established risk scoring systems and show how artificial intelligence, applied to dynamic vital parameter changes, may help to predict critical illness, adverse events, and death in patients hospitalized with COVID-19. Machine learning-assisted dynamic analysis can improve and provide patient-specific predictions in Clinical Decision Support Systems that have the potential to reduce both morbidity and mortality. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022.

5.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12465, 2023.
Article in English | Scopus | ID: covidwho-20234381

ABSTRACT

Although many AI-based scientific works on chest X-ray (CXR) interpretation have focused on COVID-19 diagnosis, fewer papers have addressed other relevant tasks, such as severity estimation, deterioration, and prognosis; the same holds for explainable decisions when estimating COVID-19 prognosis. The international hackathon launched during Dubai Expo 2020, aimed at designing machine learning solutions to help physicians formulate COVID-19 patients' prognoses, was the occasion to develop a machine learning model capable of predicting such prognoses and justifying them through interpretable explanations. The large hackathon dataset comprised subjects characterized by their CXRs and numerous clinical features collected during triage. To calculate the prognostic value, our model considered both patients' CXRs and clinical features. After automatic pre-processing to improve their quality, CXRs were processed by a deep learning model to estimate the degree of lung compromise, which was then treated as an additional clinical feature. Missing values in the original clinical parameters were adequately handled. We trained and evaluated multiple models to find the best one and fine-tuned it before inference. Finally, we produced novel explanations, both visual and numerical, to justify the model's predictions. Ultimately, our model processes a CXR and several clinical parameters to estimate a patient's COVID-19 prognosis. It proved to be accurate, ranking second in the final standings with 75%, 73.9%, and 74.4% in sensitivity, specificity, and balanced accuracy, respectively. In terms of explainability, it ranked first, being judged the most interpretable by health professionals. © 2023 SPIE.

6.
Journal of Business Research ; 158, 2023.
Article in English | Web of Science | ID: covidwho-2322649

ABSTRACT

While thousands of new mobile applications (i.e., apps) are being added to the major app markets daily, only a small portion of them attain their financial goals and survive in these competitive marketplaces. A key to the quick growth and success of relatively less popular apps is that they should make their way to the limited list of apps recommended to users of already popular apps; however, the focus of the current literature on consumers has created a void of design principles for app developers. In this study, employing a predictive network analytics approach combined with deep learning-based natural language processing and explainable artificial intelligence techniques, we shift the focus from consumers and propose a developer-oriented recommender model. We employ a set of app-specific and network-driven variables to present a novel approach for predicting potential recommendation relationships among apps, which enables app developers and marketers to characterize and target appropriate consumers. We validate the proposed model using a large (>23,000), longitudinal dataset of medical apps collected from the iOS App Store at two time points. From a total of 10,234 network links (recommendations) formed between the two data collection points, the proposed approach was able to correctly predict 8,780 links (i.e., 85.8%). We perform Shapley Additive exPlanation (SHAP) analysis to identify the most important determinants of link formations and provide insights for the app developers about the factors and design principles they can incorporate into their development process to maximize the chances of success for their apps.

7.
5th International Conference on Emerging Smart Computing and Informatics, ESCI 2023 ; 2023.
Article in English | Scopus | ID: covidwho-2322372

ABSTRACT

Explainable AI (XAI) is one of the disciplines being investigated with the goal of improving the transparency of black-box systems. XAI could help alleviate the black-box problem by providing new ways of understanding the core reasoning of AI systems. In the healthcare domain, doctors still cannot explain why certain decisions or forecasts were predicted by a particular system, which limits how and where AI technology can be implemented. To address this problem, a taxonomy of model interpretability is framed for conceptualizing explainability. An approach with a baseline system is also created that first differentiates between COVID-19-positive and COVID-19-negative chest X-ray images, and an automated explainable pipeline is designed using an XAI technique. This technique shows that the model is interpretable: the achieved results are easy to understand and can provide clinicians and patients with a transparent and reliable medical journey. This article aims to help readers understand the necessity of Explainable AI, as well as the methodological approaches used in healthcare. © 2023 IEEE.

8.
International Journal of Information Technology and Decision Making ; 22(3), 2023.
Article in English | ProQuest Central | ID: covidwho-2314833

ABSTRACT

In this research, an integrated predictive modeling framework has been developed to automatically estimate the rental price of Airbnb units based on listed descriptions and several accommodation-related utilities. The paper considers approximately 0.2 million listings of Airbnb units across seven European cities, Amsterdam, Barcelona, Brussels, Geneva, Istanbul, London, and Milan, after the COVID-19 pandemic for predictive analysis. RoBERTa, a transfer learning framework, in conjunction with K-means-based unsupervised text clustering, was used to form homogeneous groupings of Airbnb units across the cities. Subsequently, particle swarm optimization (PSO)-driven advanced ensemble machine learning frameworks were utilized to predict rental prices across the formed clusters of the respective cities using 32 offer-related features. Additionally, explainable artificial intelligence (AI), an emerging field of AI, was utilized to interpret the high-end predictive modeling and infer deeper insights into the nature and direction of influence of explanatory features on rental prices at the respective locations. The rental prices of Airbnb units in Geneva and Brussels appeared to be highly predictable, while the units in London and Milan were found to be less predictable. Different types of amenity offerings largely explain the variation in rental prices across the cities.

9.
2022 Ieee International Geoscience and Remote Sensing Symposium (Igarss 2022) ; : 7705-7708, 2022.
Article in English | Web of Science | ID: covidwho-2311271

ABSTRACT

Links between environmental conditions (e.g., meteorological factors and air quality) and COVID-19 infection/mortality have been reported worldwide. However, the existing statistical frameworks are insufficient to investigate the factors that increase the risk for COVID-19 in urban areas. In this paper, we extend the concept of machine learning-based predictive modelling for COVID-19 spread, proposing an explainable AI approach in order to i) prioritize the risk factors, ii) define the interconnections between them and iii) detect positive or negative influence of the factors with respect to COVID-19 morbidity and mortality.
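One model-agnostic way to prioritize risk factors, in the spirit of step i) above, is permutation importance: shuffle one feature column and measure how much the model's error grows. The "model" and data below are synthetic stand-ins, not the paper's: the target depends strongly on temperature, weakly on humidity, and not at all on the noise column, so the ranking should recover that order.

```python
import random

random.seed(1)
FEATURES = ["temperature", "humidity", "noise"]
X = [[random.random() for _ in FEATURES] for _ in range(200)]
y = [3.0 * r[0] + 0.5 * r[1] for r in X]   # synthetic morbidity proxy

def model(row):                 # stands in for a trained predictor
    return 3.0 * row[0] + 0.5 * row[1]

def mse(rows, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(X, y, col):
    baseline = mse(X, y)
    shuffled = [r[col] for r in X]
    random.shuffle(shuffled)    # break the feature-target association
    X_perm = [r[:col] + [v] + r[col + 1:] for r, v in zip(X, shuffled)]
    return mse(X_perm, y) - baseline

scores = {name: permutation_importance(X, y, i)
          for i, name in enumerate(FEATURES)}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # temperature should rank first, noise last
```

The sign of a factor's influence (point iii above) needs a directional method such as SHAP on top of this; permutation importance only gives magnitude.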

10.
Ieee Access ; 11:595-645, 2023.
Article in English | Web of Science | ID: covidwho-2311192

ABSTRACT

The biomedical image segmentation (BIS) task is challenging due to variations in organ type, position, shape, size, scale, orientation, and image contrast. Conventional methods lack accurate and automated designs, and artificial intelligence (AI)-based UNet has recently dominated BIS. This is the first review of its kind that microscopically addresses UNet types by complexity, stratifies UNet by its components, addresses UNet in a vascular vs. non-vascular framework, relates the key segmentation challenges to UNet-based architectures, and finally interfaces the three facets of AI: pruning, explainable AI (XAI), and AI bias. PRISMA was used to select 267 UNet-based studies. Five classes were identified and labeled as conventional UNet, superior UNet, attention-channel UNet, hybrid UNet, and ensemble UNet. We discovered 81 variations of UNet by considering six kinds of components, namely the encoder, decoder, skip connection, bridge network, loss function, and their combination. Vascular vs. non-vascular UNet architectures were compared. AP(ai)Bias 2.0-UNet was identified in these UNet classes based on (i) attributes of the UNet architecture and its performance, (ii) explainable AI (XAI), and (iii) pruning (compression). Five bias methods, namely (i) ranking, (ii) radial, (iii) regional area, (iv) PROBAST, and (v) ROBINS-I, were applied and compared using a Venn diagram. Vascular and non-vascular UNet systems were dominated by sUNet classes with attention. Most studies paid little attention to XAI and pruning strategies, and none of the UNet models qualified as bias-free. There is a need to move from a paper-to-practice paradigm for clinical evaluation and settings.

11.
International Journal on Artificial Intelligence Tools ; 32(2), 2023.
Article in English | Scopus | ID: covidwho-2291274

ABSTRACT

This paper shows the added value of using existing domain knowledge to generate new derived variables that complement a target dataset, and the benefits of including these new variables in further data analysis. The main contribution is a methodology for generating these variables as part of preprocessing, under a double approach: creating 2nd-generation knowledge-driven variables that capture the expert criteria used for reasoning in the field, or 3rd-generation data-driven indicators created by clustering original variables. Data mining and artificial intelligence techniques such as clustering or traffic-light panels help to obtain successful results. Some results of the INSESS-COVID19 project are presented: basic descriptive analysis gives simple results that, although useful to support basic policy-making, especially in health, become a much richer global perspective after including derived variables. When 2nd-generation variables are available and can be introduced into the method for creating 3rd-generation data, added value is obtained from both the basic analysis and the new data-driven indicators. © 2023 World Scientific Publishing Company.
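A minimal sketch of a 2nd-generation, knowledge-driven derived variable of the traffic-light kind mentioned above. The indicator name and the thresholds are invented for illustration; the project's real expert criteria are not reproduced here.

```python
# Expert-defined traffic-light discretisation of a raw survey indicator.
def traffic_light(value, green_max, yellow_max):
    """Map a numeric indicator to an expert-defined risk band."""
    if value <= green_max:
        return "green"
    if value <= yellow_max:
        return "yellow"
    return "red"

respondents = [
    {"id": 1, "isolation_score": 0.2},
    {"id": 2, "isolation_score": 0.9},
]
for r in respondents:
    # the derived variable is appended alongside the raw one
    r["isolation_band"] = traffic_light(r["isolation_score"],
                                        green_max=0.3, yellow_max=0.6)

print([r["isolation_band"] for r in respondents])  # ['green', 'red']
```

3rd-generation indicators would instead come from clustering the raw columns, then naming and interpreting the resulting groups.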

12.
IEEE Transactions on Artificial Intelligence ; 4(2):242-254, 2023.
Article in English | Scopus | ID: covidwho-2306664

ABSTRACT

Since the onset of the COVID-19 pandemic in 2019, many clinical prognostic scoring tools have been proposed or developed to aid clinicians in the disposition and severity assessment of pneumonia. However, there is limited work on explanation techniques best suited to clinicians in their decision making. In this article, we present a new image explainability method named ensemble AI explainability (XAI), which is based on the SHAP and Grad-CAM++ methods. It provides a visual explanation for a deep learning prognostic model that predicts the mortality risk of community-acquired pneumonia and COVID-19 respiratory infected patients. In addition, we surveyed the existing literature and compiled prevailing quantitative and qualitative metrics to systematically review the efficacy of ensemble XAI and to make comparisons with several state-of-the-art explainability methods (LIME, SHAP, saliency map, Grad-CAM, Grad-CAM++). Our quantitative experimental results have shown that ensemble XAI has a comparable absence impact (decision impact: 0.72, confident impact: 0.24). Our qualitative experiment, in which a panel of three radiologists evaluated the degree of concordance and trust in the algorithms, has shown that ensemble XAI has localization effectiveness (mean set accordance precision: 0.52, mean set accordance recall: 0.57, mean set F1: 0.50, mean set IOU: 0.36) and is the method most trusted by the panel of radiologists (mean vote: 70.2%). Finally, the deep learning interpretation dashboard used for the radiologist panel voting will be made available to the community. Our code is available at https://github.com/IHIS-HealthInsights/Interpretation-Methods-Voting-dashboard. © 2020 IEEE.

13.
EPJ Data Sci ; 12(1): 11, 2023.
Article in English | MEDLINE | ID: covidwho-2304414

ABSTRACT

Accurately forecasting patient arrivals at Urgent Care Clinics (UCCs) and Emergency Departments (EDs) is important for effective resourcing and patient care. However, correctly estimating patient flows is not straightforward since it depends on many drivers. The predictability of patient arrivals has recently been further complicated by the COVID-19 pandemic conditions and the resulting lockdowns. This study investigates how a suite of novel quasi-real-time variables like Google search terms, pedestrian traffic, the prevailing incidence levels of influenza, as well as the COVID-19 Alert Level indicators can both generally improve the forecasting models of patient flows and effectively adapt the models to the unfolding disruptions of pandemic conditions. This research also uniquely contributes to the body of work in this domain by employing tools from the eXplainable AI field to investigate more deeply the internal mechanics of the models than has previously been done. The Voting ensemble-based method combining machine learning and statistical techniques was the most reliable in our experiments. Our study showed that the prevailing COVID-19 Alert Level feature together with Google search terms and pedestrian traffic were effective at producing generalisable forecasts. The implications of this study are that proxy variables can effectively augment standard autoregressive features to ensure accurate forecasting of patient flows. The experiments showed that the proposed features are potentially effective model inputs for preserving forecast accuracies in the event of future pandemic outbreaks.
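The Voting ensemble this study found most reliable can be sketched as a simple average over heterogeneous forecasters. The daily arrival counts and the three stand-in models below (naive, seasonal-naive, moving average) are illustrative only; the paper combines trained ML and statistical models with the proxy features described above.

```python
arrivals = [30, 34, 31, 40, 29, 33, 32, 41, 28, 35, 30, 42, 30, 36]

def naive(series):                    # repeat the last observed value
    return series[-1]

def seasonal_naive(series, period=7): # repeat the value from one week ago
    return series[-period]

def moving_average(series, window=7): # smooth over the last week
    return sum(series[-window:]) / window

def voting_forecast(series, models):
    """Equal-weight vote: average the member forecasts."""
    return sum(m(series) for m in models) / len(models)

models = [naive, seasonal_naive, moving_average]
print(round(voting_forecast(arrivals, models), 2))  # -> 37.19
```

Exogenous drivers such as search trends or alert levels enter through the member models, not the vote itself, which is why the ensemble adapts when any one member picks up the disruption.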

14.
Int J Data Sci Anal ; : 1-14, 2022 Apr 30.
Article in English | MEDLINE | ID: covidwho-2291834

ABSTRACT

The world is witnessing the devastating effects of the COVID-19 pandemic. Each country responded to contain the spread of the virus in the early stages through diverse response measures. Interpreting these responses and their patterns globally is essential to inform future responses to COVID-19 variants and future pandemics. A stochastic epidemiological model (SEM) is a well-established mathematical tool that helps to analyse the spread of infectious diseases through communities and the effects of various response measures. However, interpreting the outcome of these models is complex and often requires manual effort. In this paper, we propose a novel method to provide the explainability of an epidemiological model. We represent the output of SEM as a tensor model. We then apply nonnegative tensor factorization (NTF) to identify patterns of global response behaviours of countries and cluster the countries based on these patterns. We interpret the patterns and clusters to understand the global response behaviour of countries in the early stages of the pandemic. Our experimental results demonstrate the advantage of clustering using NTF and provide useful insights into the characteristics of country clusters.
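The paper factorizes the SEM output tensor with NTF; the same multiplicative-update machinery can be shown in the two-mode case (nonnegative matrix factorization, which NTF generalizes per tensor unfolding). The classic Lee-Seung updates are used below on an invented toy "country x response-measure" matrix with two latent behaviours; this is a sketch of the decomposition idea, not the paper's method.

```python
import random

random.seed(2)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, rank, iters=500, eps=1e-9):
    """Nonnegative factorization V ~ W @ H via multiplicative updates."""
    n, m = len(V), len(V[0])
    W = [[random.random() for _ in range(rank)] for _ in range(n)]
    H = [[random.random() for _ in range(m)] for _ in range(rank)]
    for _ in range(iters):
        WH = matmul(W, H)
        # H <- H * (W^T V) / (W^T W H)
        num = matmul(transpose(W), V)
        den = matmul(transpose(W), WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(rank)]
        WH = matmul(W, H)
        # W <- W * (V H^T) / (W H H^T)
        num = matmul(V, transpose(H))
        den = matmul(WH, transpose(H))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(n)]
    return W, H

# toy "country x response-measure" matrix with two latent behaviours
V = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
W, H = nmf(V, rank=2)
approx = matmul(W, H)
err = sum((V[i][j] - approx[i][j]) ** 2 for i in range(4) for j in range(4))
print(round(err, 4))  # reconstruction error should be near zero
```

Countries can then be clustered on the rows of W, which is the role NTF's country-mode factor plays in the paper.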

15.
6th IEEE International Conference on Computational System and Information Technology for Sustainable Solutions, CSITSS 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2274227

ABSTRACT

Artificial intelligence is becoming more advanced, with increasing complexity in generating predictions, and as a result it is becoming more challenging for users to understand and retrace how an algorithm predicts outcomes. Artificial intelligence also increasingly contributes to decision making. This paper presents a chest X-ray diagnostic model explained with the local interpretable model-agnostic explanations (LIME) method. The model is trained on various COVID and non-COVID images. The chest X-rays are segmented to extract the lungs, and the model's predictions are tested with perturbed images generated using LIME. This paper opens a wide area of research in the field of XAI. © 2022 IEEE.

16.
8th Future of Information and Computing Conference, FICC 2023 ; 651 LNNS:659-675, 2023.
Article in English | Scopus | ID: covidwho-2269331

ABSTRACT

Computer vision in medical diagnosis has achieved a high level of success in diagnosing diseases with high accuracy. However, conventional classifiers that produce an image-to-label result provide insufficient information for medical professionals to judge, raising concerns over the trust and reliability of a model whose results cannot be explained. Class activation maps provide insight into the convolutional neural network feature maps that lead to a classification, but for lung diseases the region of concern is only the lungs. Therefore, the proposed model combines image segmentation models and classifiers to crop out only the lung region of a chest X-ray's class activation map, providing a visualization that improves the explainability of, and trust in, an AI's diagnosis by focusing on the model's weights within the region of concern. The proposed U-Net model achieves 97.72% accuracy and a dice coefficient of 0.9691 on testing data from the COVID-QU-Ex dataset, which includes both diseased and healthy lungs. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

17.
9th International Symposium on Applied Computing for Software and Smart systems, ACSS 2022 ; 555:227-234, 2023.
Article in English | Scopus | ID: covidwho-2261125

ABSTRACT

Stress is one of the world's major health issues and one of the major reasons for suicide. It also leads to other mental health issues, such as depression and anxiety, and damages organs of the respiratory, cardiovascular, and nervous systems. In recent years, stress has affected many individuals due to the pandemic. As governments across the globe imposed lockdowns, stress levels rose significantly because of disturbances caused by COVID infections, the loss of loved ones, continuous engagement with laptops and mobiles, and similar factors. Stress has not only harmed health but also disturbed relationships and become a self-destructive factor. This project aims to help people understand their stress and consult a psychologist at the right time to overcome it. Although stress is an active area of research with high-performing models, most of that work is based on signals and speech, which are computationally costlier; prior text-based work using the state-of-the-art BERT model achieved an f1-score of 80.65%. This project focuses on the text domain and uses the open-source Stress Analysis on Social Media dataset available on Kaggle, which contains 3.6K samples. Both machine learning and deep learning models were trained on 80% of the data and validated on 20%. After optimization and evaluation of several models, the best model achieved a benchmark result of 83.74% f1-score on test data using a new network architecture: a combination of stacked Transformer encoder layers with stacked bidirectional LSTMs. In addition, explainable AI was applied to an embedding layer to inspect input attributions when predicting the results. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

18.
2022 International Conference on Data Science, Agents and Artificial Intelligence, ICDSAAI 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2259998

ABSTRACT

Due to the pandemic, vaccines are being developed at a rapid pace, creating a need to ensure proper post-market pharmacovigilance. The proposed model helps speed up this process by classifying the adverse drug reactions (ADRs) of vaccines based on severity, helping vaccine manufacturers take necessary and timely action. The model takes patient data as input (such as symptoms, vaccination details, and patient health details), which is preprocessed and cleaned. The ADR is then classified as a minor, major, or deadly reaction. The system used count vector (CV), word-level TF-IDF, n-gram TF-IDF, and character-level TF-IDF features with Naive Bayes, Random Forest, Logistic Regression, Gradient Boost, and AdaBoost machine learning classifiers. Using Random Forest with word-level TF-IDF, a comparatively higher accuracy of 93.83% and an F1 score of 0.92 were achieved. © 2022 IEEE.
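Word-level TF-IDF, the feature that performed best with Random Forest here, can be computed from scratch. The ADR-style report texts below are invented stand-ins for the patient symptom data.

```python
import math
from collections import Counter

docs = [
    "mild soreness at injection site",
    "severe chest pain and shortness of breath",
    "mild headache after injection",
]

tokenized = [d.split() for d in docs]
N = len(tokenized)
# document frequency: in how many docs each word appears
df = Counter(w for doc in tokenized for w in set(doc))

def tfidf(doc):
    """Term frequency times inverse document frequency per word."""
    tf = Counter(doc)
    return {w: (tf[w] / len(doc)) * math.log(N / df[w]) for w in tf}

vectors = [tfidf(d) for d in tokenized]
# "mild" appears in two documents, so its weight is lower than "severe"'s
print(vectors[1]["severe"] > vectors[0]["mild"])  # True
```

These sparse weight vectors (padded with zeros for absent vocabulary words) are what a classifier such as Random Forest consumes.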

19.
2022 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2247150

ABSTRACT

Explainability is important in the design and deployment of neural networks. It allows engineers to design better models and can give end-users an improved understanding of the outputs. However, many explainability methods are unsuited to the domain of medical imaging. Saliency mapping methods only describe what regions of an input image contributed to the output, but don't explain the important visual features within those regions. Feature visualization methods have not yet been useful in the domain of medical imaging due to the visual complexity of images generally resulting in un-interpretable features. In this work, we propose a novel explainability technique called 'Class Specific Semantic Dictionaries'. This extends saliency mapping and feature visualisation methods to enable the analysis of neural network decision-making in the context of medical image diagnosis. By utilising gradient information from the fully connected layers, our approach is able to give insight into the channels deemed important by the network for the diagnosis of each particular disease. The important channels for a class are contextualised by showing the highly activating examples from the training data, providing an understanding of the learned features through example. The explainability techniques are combined into a single User Interface (UI) to streamline the evaluation of neural networks. To demonstrate how our new method overcomes the explainability challenges of medical imaging models we analyse COVID-Net, an open source convolutional neural network for diagnosing COVID-19 from chest x-rays. We present evidence that, despite achieving 96.3% accuracy on the test data, COVID-Net uses confounding variables not indicative of underlying disease to discriminate between COVID-Positive and COVID-Negative patients and may not generalise well on new data. © 2022 IEEE.

20.
Biomed Signal Process Control ; 84: 104695, 2023 Jul.
Article in English | MEDLINE | ID: covidwho-2254992

ABSTRACT

Lung diseases lead to complications from obstructive diseases, and the COVID-19 pandemic has increased lung disease-related deaths. Medical practitioners use stethoscopes to diagnose lung disease; however, since experience and the diagnosis of respiratory sounds differ between practitioners, an artificial intelligence model capable of objective judgment is required. Therefore, in this study, we propose a lung disease classification model that uses an attention module and deep learning. Respiratory sound features were extracted using log-Mel spectrograms and MFCCs. Normal and five types of adventitious sounds were effectively classified by improving VGGish and adding a light attention connected module to which the efficient channel attention module (ECA-Net) was applied. The performance of the model was evaluated in terms of accuracy, precision, sensitivity, specificity, f1-score, and balanced accuracy, which were 92.56%, 92.81%, 92.22%, 98.50%, 92.29%, and 95.4%, respectively, confirming high performance due to the attention effect. The causes of the lung disease classifications were analyzed using gradient-weighted class activation mapping (Grad-CAM), and model performance was compared on open lung sound data measured with a Littmann 3200 stethoscope; experts' opinions were also included. Our results will contribute to the early diagnosis and interpretation of diseases in patients with lung disease by utilizing algorithms in smart medical stethoscopes.
